414 research outputs found

    Sankofa Project

    Our class, the Sankofa Project, traveled to the Cleveland School of the Arts for a week-long residency. This is our final project, a book full of interviews and artwork from our trip. We did all of this work remotely for the culminating group project.

    Large-Scale Goodness Polarity Lexicons for Community Question Answering

    We transfer a key idea from the field of sentiment analysis to a new domain: community question answering (cQA). The cQA task we are interested in is the following: given a question and a thread of comments, we want to re-rank the comments so that the ones that are good answers to the question are ranked higher than the bad ones. We notice that good vs. bad comments use specific vocabulary and that one can often predict the goodness/badness of a comment even ignoring the question, based on the comment contents alone. This leads us to the idea of building a good/bad polarity lexicon as an analogy to the positive/negative sentiment polarity lexicons commonly used in sentiment analysis. In particular, we use pointwise mutual information to build large-scale goodness polarity lexicons in a semi-supervised manner, starting from a small number of initial seeds. The evaluation results show an improvement of 0.7 MAP points absolute over a very strong baseline and state-of-the-art performance on SemEval-2016 Task 3. Comment: SIGIR '17, August 07-11, 2017, Shinjuku, Tokyo, Japan; Community Question Answering; Goodness polarity lexicons; Sentiment Analysis
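    The lexicon-construction step is straightforward to sketch. The Python fragment below is a minimal illustration of the PMI seed-expansion idea, assuming comments arrive as token lists and that co-occurrence with a seed word inside the same comment stands in for supervision; the data format, frequency cutoff, and smoothing constant are assumptions for illustration, not the paper's exact setup.

        import math
        from collections import Counter

        def goodness_lexicon(comments, good_seeds, bad_seeds, min_count=5):
            """Expand small good/bad seed sets into a polarity lexicon via PMI.

            comments: iterable of token lists from unlabeled cQA threads.
            Co-occurrence with a seed inside one comment is the association
            signal; names and thresholds here are illustrative assumptions.
            """
            good_seeds, bad_seeds = set(good_seeds), set(bad_seeds)
            word_n, word_good, word_bad = Counter(), Counter(), Counter()
            n = n_good = n_bad = 0
            for tokens in comments:
                words = set(tokens)
                has_good = bool(words & good_seeds)
                has_bad = bool(words & bad_seeds)
                n += 1
                n_good += has_good
                n_bad += has_bad
                for w in words:
                    word_n[w] += 1
                    word_good[w] += has_good
                    word_bad[w] += has_bad
            eps = 1e-9  # smoothing to keep the logarithms finite
            scores = {}
            for w, c in word_n.items():
                if c < min_count:
                    continue
                # score(w) = PMI(w, good) - PMI(w, bad); positive values mark
                # vocabulary associated with good answers, negative with bad.
                pmi_g = math.log((word_good[w] / n + eps) / ((c / n) * (n_good / n) + eps))
                pmi_b = math.log((word_bad[w] / n + eps) / ((c / n) * (n_bad / n) + eps))
                scores[w] = pmi_g - pmi_b
            return scores

    Words scoring high in such a lexicon can then serve as features for the comment re-ranker, mirroring how sentiment lexicons feed polarity classifiers.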

    Towards Semi-Automated Annotation for Prepositional Phrase Attachment

    This paper investigates whether high-quality annotations for tasks involving semantic disambiguation can be obtained without a major investment in time or expense. We examine the use of untrained human volunteers from Amazon's Mechanical Turk in disambiguating prepositional phrase (PP) attachment over sentences drawn from the Wall Street Journal corpus. Our goal is to compare the performance of these crowdsourced judgments to the annotations supplied by trained linguists for the Penn Treebank project in order to indicate the viability of this approach for annotation projects that involve contextual disambiguation. The results of our experiments show that invoking majority agreement between multiple human workers can yield PP attachments with fairly high precision, confirming that this crowdsourcing approach to syntactic annotation holds promise for the generation of training corpora in new domains and genres.
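    The majority-agreement filter mentioned above reduces to a few lines of vote aggregation. Below is a minimal sketch in Python, assuming each item collects independent worker labels of "noun" or "verb" attachment; the item identifiers, label set, and agreement threshold are illustrative assumptions rather than the study's exact protocol.

        from collections import Counter

        def majority_attachment(judgments, min_agreement=3):
            """Aggregate crowdsourced PP-attachment labels by majority vote.

            judgments: dict mapping item id -> list of worker labels.
            Items whose most frequent label falls short of min_agreement
            are left undecided instead of being annotated unreliably.
            """
            decided, undecided = {}, []
            for item_id, labels in judgments.items():
                top_label, top_votes = Counter(labels).most_common(1)[0]
                if top_votes >= min_agreement:
                    decided[item_id] = top_label
                else:
                    undecided.append(item_id)
            return decided, undecided

        # Hypothetical items: workers judge whether a PP attaches to the
        # noun or the verb in a WSJ sentence.
        votes = {
            "wsj_0012#3": ["noun", "noun", "noun", "verb", "noun"],  # 4 of 5 agree
            "wsj_0044#7": ["verb", "noun", "verb", "noun"],          # 2-2 split
        }
        labels, needs_review = majority_attachment(votes)
        print(labels)        # {'wsj_0012#3': 'noun'}
        print(needs_review)  # ['wsj_0044#7']

    Filtering on agreement trades coverage for precision, which is consistent with the paper's observation that majority-backed judgments reach fairly high precision.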
    • …